netty: disable huffman coding in headers #10563
Conversation
```java
new Rpc(transport, headers).halfClose().waitForResponse();

Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
```
There appears to be a race between the RPC completing/releasing this thread and the traffic counter finishing the counting of the bytes. I'm working to remove this hack.
I may have misunderstood #1670; I understood it as a general performance issue for time spent within

We use headers to propagate an extensive amount of request-specific metadata (e.g. special routing, failure-injection information, and authentication details). Some of this metadata is serialized to binary, then base64-encoded, and passed as a single header (instead of multiple smaller headers) in the 800-1200 character range. A long-standing pattern in our ecosystem is to inject these headers at a high level and propagate them through the RPC chain (instead of using an independent side channel), so this impacts a broad range of our internal services.
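To make the header pattern above concrete, here is a small self-contained sketch (the payload and class names are hypothetical, not from this PR): a binary metadata blob is base64-encoded and carried as a single header value, which lands in the size range described and produces high-entropy text that HPACK huffman coding compresses poorly.

```java
import java.util.Base64;
import java.util.Random;

public class MetadataHeaderDemo {
    public static void main(String[] args) {
        // Hypothetical request-scoped metadata blob (routing + auth details),
        // serialized to bytes by some in-house format.
        byte[] serialized = new byte[750];
        new Random(42).nextBytes(serialized);

        // Base64-encode and carry as one header value rather than many small ones.
        String headerValue = Base64.getEncoder().withoutPadding().encodeToString(serialized);

        // Base64 inflates the payload by ~4/3, landing in the 800-1200 char
        // range the comment describes. The near-random output is also why
        // huffman coding saves little on such values.
        System.out.println(headerValue.length()); // 1000 for a 750-byte input
    }
}
```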
I've spoken with some others, and they agree: huffman in the datacenter doesn't add much value in the common cases, especially given "the header is 512+ bytes." It could be useful to turn on huffman based on the connection latency (say, >10ms means "assume cross-datacenter"), but the Netty API doesn't lend itself to that. The savings here aren't huge and it is expensive; the dynamic table provides the biggest savings. Let's just disable huffman unconditionally. The "Netty doesn't have an API for this in the new API" issue is a concern, but let's kick that can down the road. This is the least of our problems for swapping to the new API.
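For readers unfamiliar with the knob being discussed: Netty's HPACK encoder exposes a huffman-coding threshold, and a header string is huffman-coded only when it is long enough and the coded form would actually be shorter. The sketch below models that gate in plain Java (the exact comparison Netty uses is my reading of its encoder and is an assumption here, not code from this PR); setting the threshold to `Integer.MAX_VALUE` means huffman is never chosen, which is the "disable huffman unconditionally" outcome.

```java
public class HuffmanGate {
    /**
     * Sketch of a huffCodeThreshold-style gate, modeled on the knob the Netty
     * HPACK encoder exposes (the precise condition is an assumption):
     * huffman-code a header string only if it is at least `threshold` chars
     * long AND the huffman-coded form would be shorter than the raw form.
     */
    static boolean shouldHuffmanEncode(int stringLength, int huffmanLength, int threshold) {
        return stringLength >= threshold && huffmanLength < stringLength;
    }

    public static void main(String[] args) {
        // A 600-char header that would compress to 450 chars: coded with a
        // modest threshold...
        System.out.println(shouldHuffmanEncode(600, 450, 512));               // true
        // ...but never coded once the threshold is Integer.MAX_VALUE.
        System.out.println(shouldHuffmanEncode(600, 450, Integer.MAX_VALUE)); // false
    }
}
```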
…ChannelBuilder, update tests to be predictable on coded size.
I'm a bit wary of assuming RPCs will be exactly the same size each time they are sent. But there's no `grpc-timeout`, so... yeah, they will end up being identical in size. Seems okay.
I'm going to be on vacation, but this approach seems fine.
netty/src/test/java/io/grpc/netty/NettyClientTransportTest.java
note to merger: this PR has changed purpose (it is no longer about configuration), so we'll need to make sure the messages make sense when we squash and merge.
```java
@@ -150,7 +152,9 @@ static NettyClientHandler newHandler(
    Preconditions.checkArgument(maxHeaderListSize > 0, "maxHeaderListSize must be positive");
    Http2HeadersDecoder headersDecoder = new GrpcHttp2ClientHeadersDecoder(maxHeaderListSize);
    Http2FrameReader frameReader = new DefaultHttp2FrameReader(headersDecoder);
    Http2FrameWriter frameWriter = new DefaultHttp2FrameWriter();
    Http2HeadersEncoder encoder = new DefaultHttp2HeadersEncoder(
```
How does this not get thrown away when you call newHandler below?
It is passed to the frameWriter on the next line.
I finally got time to take another pass at this 🙂 The test now directly asserts on the data in the header and no longer suffers from the previous stability issues. Though there was some thrashing in the branch, the diff against
Ah, yeah, the test works reasonably just checking for the string among the bytes.
@ejona86 any planned release date for this MR?

@anuragagarwal561994, this made it onto the 1.60.x branch. The 1.60 milestone is scheduled for next week.

Thanks for the update @ejona86. Black Friday is coming, so I thought we could get a boost before that 😄, but anyway, I have added a hack to get it working for our use case.
When operating gRPC clients that are not bandwidth constrained (e.g. calling a colocated proxy), clients may want to spend less CPU on header encoding at the cost of larger message sizes. This change allows such clients to configure the HPACK huffman coding threshold to influence that tradeoff.

Anecdotally, when calling sidecars via gRPC, we have been setting the threshold to `Integer.MAX_VALUE`. This saves CPU on header encoding in the application and again on decoding in the sidecar. This improves the header encoding performance noted in #1670.
Thanks to @drobertduke for performing the initial research.